An End-to-End Collaborative Learning Approach for Connected Autonomous Vehicles in Occluded Scenarios
Parada, Leandro, Tian, Hanlin, Escribano, Jose, Angeloudis, Panagiotis
Collaborative navigation becomes essential in occluded driving scenarios, where independent driving policies are likely to lead to collisions. One promising approach to this issue is the use of Vehicle-to-Vehicle (V2V) networks, which allow perception information to be shared with nearby agents, preventing catastrophic accidents. In this article, we propose a collaborative control method based on a V2V network for sharing compressed LiDAR features, employing Proximal Policy Optimisation to train safe and efficient navigation policies. Unlike previous approaches that rely on expert data (behaviour cloning), our approach learns the multi-agent policies directly from experience in the occluded environment while effectively meeting bandwidth limitations. The proposed method first preprocesses LiDAR point-cloud data through a convolutional neural network to obtain meaningful features, then shares them with nearby CAVs to warn of potentially dangerous situations. To evaluate the proposed method, we developed an occluded-intersection gym environment based on the CARLA autonomous driving simulator that allows real-time data sharing among agents. Our experimental results demonstrate the consistent superiority of our collaborative control method over an independent reinforcement learning method and a cooperative early-fusion method.
- North America > United States (0.14)
- Europe > United Kingdom (0.14)
- Transportation > Ground > Road (1.00)
- Automobiles & Trucks (0.88)
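The abstract above describes sharing compressed LiDAR features over V2V links under bandwidth limits. As an illustrative sketch only (not the paper's CNN-based compressor), the bandwidth constraint can be met by transmitting just the largest-magnitude feature activations, quantized to 8 bits; the function names `compress_features` and `decompress_features` are hypothetical:

```python
def compress_features(features, k=4, levels=256):
    """Sketch of a bandwidth-limited V2V feature message: keep the k
    largest-magnitude activations and quantize them to `levels` steps."""
    # indices of the k activations with the largest magnitude
    idx = sorted(range(len(features)), key=lambda i: -abs(features[i]))[:k]
    lo = min(features[i] for i in idx)
    hi = max(features[i] for i in idx)
    scale = (hi - lo) / (levels - 1) or 1.0  # avoid div-by-zero when flat
    # message payload: (index, quantized level) pairs plus the value range
    codes = [(i, round((features[i] - lo) / scale)) for i in idx]
    return codes, (lo, hi)

def decompress_features(codes, value_range, size, levels=256):
    """Reconstruct a dense feature vector on the receiving CAV."""
    lo, hi = value_range
    scale = (hi - lo) / (levels - 1) or 1.0
    out = [0.0] * size  # activations that were not sent default to zero
    for i, q in codes:
        out[i] = lo + q * scale
    return out
```

With `k=3` and 8-bit levels, each message carries three (index, byte) pairs plus a two-float range, regardless of the original feature dimension, which is the kind of fixed budget a V2V channel can guarantee.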
Safe and Robust Multi-Agent Reinforcement Learning for Connected Autonomous Vehicles under State Perturbations
Zhang, Zhili, Sun, Yanchao, Huang, Furong, Miao, Fei
Sensing and communication technologies have enhanced learning-based decision-making methodologies for multi-agent systems such as connected autonomous vehicles (CAVs). However, most existing safe reinforcement learning methods assume accurate state information. Meeting safety requirements under state uncertainty remains challenging for CAVs, given noisy sensor measurements and vulnerable communication channels. In this work, we propose a Robust Multi-Agent Proximal Policy Optimization with a robust Safety Shield (SR-MAPPO) for CAVs in various driving scenarios. Our approach copes with perturbed or uncertain state inputs through both a robust MARL algorithm and a control barrier function (CBF)-based safety shield: the robust policy is trained with a worst-case Q-function regularization module that pursues a higher lower-bounded reward, while the robust CBF safety shield enforces the CAVs' collision-free constraints in complicated driving scenarios even under perturbed vehicle state information. We validate the advantages of SR-MAPPO in robustness and safety against baselines under different driving and state-perturbation scenarios in the CARLA simulator. The SR-MAPPO policy is verified to maintain higher safety rates and efficiency (reward) when threatened by both state perturbations and the dangerous behaviors of unconnected vehicles.
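To make the CBF safety-shield idea concrete, here is a minimal one-dimensional car-following sketch, not the paper's robust multi-agent shield. It assumes a discrete-time barrier h = gap - d_safe with the standard decay condition h_next >= (1 - gamma) * h, and overrides the RL action only when that condition would be violated; the function name `cbf_shield` and all parameter values are illustrative:

```python
def cbf_shield(gap, rel_speed, accel_rl,
               dt=0.1, gamma=0.5, d_safe=5.0, a_min=-6.0, a_max=3.0):
    """Minimal discrete-time CBF filter for longitudinal control (a sketch).
    gap: distance to the lead vehicle; rel_speed: lead speed minus ego speed;
    accel_rl: acceleration proposed by the learned policy."""
    h = gap - d_safe
    # next-step barrier under a simple kinematic model:
    #   gap' = gap + (rel_speed - a * dt) * dt
    # requiring h' >= (1 - gamma) * h gives an affine bound on a:
    a_bound = (gap + rel_speed * dt - d_safe - (1 - gamma) * h) / (dt * dt)
    # take the action closest to the RL proposal that satisfies the bound,
    # then clip to the actuator limits (maximum braking as a fallback)
    return max(a_min, min(min(accel_rl, a_bound), a_max))
```

When the gap is large the shield is inactive and the policy's action passes through unchanged; as the barrier approaches zero the admissible acceleration shrinks, eventually forcing maximum braking.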
Fleet of six driverless Ford Mondeos to be tested in Oxford for the next five months
A fleet of six self-driving Ford Mondeos will be navigating the streets of Oxford at all hours and in all weathers to test the abilities of driverless cars as part of a new trial. Technology firm Oxbotica, spun out of an Oxford University project, has retrofitted the vehicles, which are following a nine-mile round trip within the city. A dozen cameras, three Lidar sensors and two radar sensors put the car at 'level 4 autonomy', meaning it can handle almost all situations itself. A person needs to be in the driving seat by law, but they won't be touching the steering wheel or pedals; the driverless car will be 'taking them for a ride'. The Oxford trial is part of the UK government-backed £12.3 million Endeavour project, set up to try deploying a fleet of self-driving cars in several cities.
- Automobiles & Trucks (0.99)
- Transportation > Passenger (0.84)
- Transportation > Ground > Road (0.84)
- Information Technology > Robotics & Automation (0.84)
Behavior Planning for Connected Autonomous Vehicles Using Feedback Deep Reinforcement Learning
With the development of communication technologies, connected autonomous vehicles (CAVs) can share information with each other. Besides basic safety messages, they can also share their future plans. We propose a behavior planning method for CAVs that decides whether to change lane or keep lane based on the information received from neighbors and a policy learned by deep reinforcement learning (DRL). Our state design, based on shared information, is scalable in the number of vehicles. The proposed feedback deep Q-learning algorithms integrate the policy-learning process with a continuous-state-space controller, which in turn gives feedback about actions and rewards to the learning process. We design both centralized and distributed DRL algorithms. In experiments, our behavior planning method increases traffic flow and driving comfort compared with a traditional rule-based control method. The distributed learning result is comparable to the centralized learning result, which reveals the possibility of improving the behavior planning policy online. We also validate our algorithm in a more complicated scenario with two road closures on a freeway.
- Automobiles & Trucks (1.00)
- Transportation > Ground > Road (0.93)
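Two ideas in the abstract above lend themselves to a small sketch: a state design that stays fixed-size however many neighbors broadcast their plans, and a Q-learning update whose reward carries feedback from the low-level controller. The following is an illustrative tabular simplification, not the paper's deep feedback Q-learning; `build_state`, `q_update`, and the message format are hypothetical:

```python
ACTIONS = ("keep_lane", "change_lane")

def build_state(ego_speed, neighbor_msgs, horizon=50.0):
    """Scalable state sketch: aggregate neighbor messages into a fixed-size
    feature (nearest front gap per lane), independent of vehicle count.
    Each message is a (lane, gap_ahead_of_ego) pair."""
    front_gap = {0: horizon, 1: horizon}
    for lane, gap in neighbor_msgs:
        if 0.0 < gap < front_gap[lane]:
            front_gap[lane] = gap
    # discretise so the Q-table stays small
    return (round(ego_speed), round(front_gap[0]), round(front_gap[1]))

def q_update(Q, s, a, reward, s_next, alpha=0.1, gamma=0.95):
    """One tabular Q-learning step; the controller's feedback enters
    through `reward` (e.g. the comfort/efficiency it reports back)."""
    best_next = max(Q.get((s_next, b), 0.0) for b in ACTIONS)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (reward + gamma * best_next - old)
    return Q[(s, a)]
```

Because the aggregation keeps only per-lane summaries, adding a tenth or a hundredth neighbor changes the message processing but not the state dimension, which is what makes the design scalable in the number of vehicles.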